    End-to-End Learning of Video Super-Resolution with Motion Compensation

    Learning approaches have shown great success in the task of super-resolving an image given a low-resolution input. Video super-resolution aims to additionally exploit the information from multiple images. Typically, the images are related via optical flow and consecutive image warping. In this paper, we provide an end-to-end video super-resolution network that, in contrast to previous works, includes the estimation of optical flow in the overall network architecture. We analyze the usage of optical flow for video super-resolution and find that common off-the-shelf image warping does not allow video super-resolution to benefit much from optical flow. We rather propose an operation for motion compensation that performs warping from low to high resolution directly. We show that with this network configuration, video super-resolution can benefit from optical flow, and we obtain state-of-the-art results on the popular test sets. We also show that the processing of whole images rather than independent patches is responsible for a large increase in accuracy. Comment: Accepted to GCPR 2017.
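
    As a minimal sketch of the flow-based warping step discussed above (the common off-the-shelf variant that the paper analyzes, not the proposed low-to-high-resolution compensation), the following PyTorch snippet warps a neighbouring frame toward the reference frame with a dense optical flow field; the function name and tensor shapes are illustrative assumptions.

        import torch
        import torch.nn.functional as F

        def warp_with_flow(frame, flow):
            """Warp `frame` (N, C, H, W) with a dense optical flow field `flow` (N, 2, H, W)."""
            n, _, h, w = frame.shape
            # Base sampling grid in pixel coordinates.
            ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
            base = torch.stack((xs, ys), dim=0).float().unsqueeze(0).to(frame.device)
            coords = base + flow                      # displaced sampling positions
            # Normalise to [-1, 1] as required by grid_sample.
            gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
            gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
            grid = torch.stack((gx, gy), dim=-1)      # (N, H, W, 2)
            return F.grid_sample(frame, grid, align_corners=True)

        # Usage: warp a neighbouring low-resolution frame toward the reference
        # frame before feeding both into a super-resolution network.
        neighbour = torch.rand(1, 3, 64, 64)
        flow = torch.zeros(1, 2, 64, 64)              # zero flow leaves the frame unchanged
        warped = warp_with_flow(neighbour, flow)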

    Motion Deblurring in the Wild

    The task of image deblurring is a very ill-posed problem, as both the image and the blur are unknown. Moreover, when pictures are taken in the wild, this task becomes even more challenging due to the blur varying spatially and the occlusions between objects. Due to the complexity of the general image model, we propose a novel convolutional network architecture which directly generates the sharp image. This network is built in three stages and exploits the benefits of pyramid schemes often used in blind deconvolution. One of the main difficulties in training such a network is to design a suitable dataset. While useful data can be obtained by synthetically blurring a collection of images, more realistic data must be collected in the wild. To obtain such data, we use a high-frame-rate video camera, keep one frame as the sharp image, and use the frame average as the corresponding blurred image. We show that this realistic dataset is key to achieving state-of-the-art performance and dealing with occlusions.
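
    A minimal sketch of the data collection scheme described above, under illustrative assumptions (OpenCV for I/O, a 7-frame averaging window, hypothetical file names): average a window of consecutive high-frame-rate frames to synthesize the blurred image and keep the central frame as the sharp target.

        import cv2
        import numpy as np

        def make_pair(frames):
            """frames: list of consecutive frames (H, W, 3) from a high-frame-rate video."""
            stack = np.stack(frames).astype(np.float32)
            blurred = stack.mean(axis=0)              # temporal average approximates motion blur
            sharp = stack[len(frames) // 2]           # central frame serves as the sharp target
            return blurred.astype(np.uint8), sharp.astype(np.uint8)

        capture = cv2.VideoCapture("high_fps_clip.mp4")   # hypothetical input clip
        frames = []
        while len(frames) < 7:
            ok, frame = capture.read()
            if not ok:
                break
            frames.append(frame)
        capture.release()
        if len(frames) == 7:
            blurred, sharp = make_pair(frames)
            cv2.imwrite("blurred.png", blurred)
            cv2.imwrite("sharp.png", sharp)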

    Simple, Accurate, and Robust Nonparametric Blind Super-Resolution

    This paper proposes a simple, accurate, and robust approach to single-image nonparametric blind Super-Resolution (SR). The task is formulated as a functional to be minimized with respect to both an intermediate super-resolved image and a nonparametric blur kernel. The proposed approach includes a convolution consistency constraint, which uses a non-blind learning-based SR result to better guide the estimation process. Another key component is the unnatural bi-l0-l2-norm regularization imposed on the super-resolved, sharp image and the blur kernel, which is shown to be quite beneficial for estimating the blur kernel accurately. The numerical optimization is implemented by coupling the splitting augmented Lagrangian method with the conjugate gradient (CG) method. Using the pre-estimated blur kernel, we finally reconstruct the SR image with a very simple non-blind SR method that uses a natural image prior. The proposed approach is demonstrated to achieve better performance than the recent method by Michaeli and Irani [2] in terms of both kernel estimation accuracy and image SR quality.
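
    Schematically, and only as a hedged reading of the abstract rather than the paper's exact functional, the joint estimation problem over the sharp image x and kernel k has the form

        \min_{x,\,k}\; \| D(k \ast x) - y \|_2^2 \;+\; \lambda\, R_{\text{bi-}\ell_0\text{-}\ell_2}(x, k) \;+\; \gamma\, C(x, k; \tilde{x})

    where y is the low-resolution input, D a downsampling operator, R the bi-l0-l2 regularizer on the sharp image and kernel, and C the convolution consistency term built from the non-blind learning-based SR result \tilde{x}; the specific weights and the exact form of C are not reproduced here.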

    Visualizing Escherichia coli Sub-Cellular Structure Using Sparse Deconvolution Spatial Light Interference Tomography

    Studying the 3D sub-cellular structure of living cells is essential to our understanding of biological function. However, tomographic imaging of live cells is challenging mainly because they are transparent, i.e., weakly scattering structures. Therefore, this type of imaging has been implemented largely using fluorescence techniques. While confocal fluorescence imaging is a common approach to achieve sectioning, it requires fluorescence probes that are often harmful to the living specimen. On the other hand, by using the intrinsic contrast of the structures, it is possible to study living cells in a non-invasive manner. One method that provides high-resolution quantitative information about nanoscale structures is a broadband interferometric technique known as Spatial Light Interference Microscopy (SLIM). In addition to rendering quantitative phase information, when combined with a high-numerical-aperture objective, SLIM also provides excellent depth-sectioning capabilities. However, as in all linear optical systems, SLIM's resolution is limited by diffraction. Here we present a novel 3D field deconvolution algorithm that exploits the sparsity of phase images and renders images with resolution beyond the diffraction limit. We employ this label-free method, called deconvolution Spatial Light Interference Tomography (dSLIT), to visualize coiled sub-cellular structures in E. coli cells, which are most likely the cytoskeletal MreB protein and the division-site-regulating MinCDE proteins. Previously, these structures have only been observed using specialized strains and plasmids and fluorescence techniques. Our results indicate that dSLIT can be employed to study such structures in a practical and non-invasive manner.
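
    To illustrate the general idea of deconvolution with a sparsity prior (only a 2D toy sketch; dSLIT itself is a 3D field deconvolution and is not reproduced here), an ISTA-style iteration with a soft-threshold step might look as follows; the Gaussian PSF, step size, and threshold are illustrative assumptions.

        import numpy as np
        from scipy.signal import fftconvolve

        def ista_deconvolve(blurred, psf, lam=0.01, step=1.0, iters=200):
            """Recover a sparse image x with blurred ≈ psf * x via ISTA."""
            x = np.zeros_like(blurred)
            psf_adj = psf[::-1, ::-1]                 # adjoint (flipped) kernel
            for _ in range(iters):
                residual = fftconvolve(x, psf, mode="same") - blurred
                x = x - step * fftconvolve(residual, psf_adj, mode="same")
                x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # soft threshold
            return x

        # Synthetic usage: two point sources blurred by a normalised Gaussian PSF.
        yy, xx = np.mgrid[-7:8, -7:8]
        psf = np.exp(-(xx ** 2 + yy ** 2) / 8.0)
        psf /= psf.sum()
        truth = np.zeros((64, 64))
        truth[20, 20] = truth[40, 45] = 1.0
        blurred = fftconvolve(truth, psf, mode="same")
        recovered = ista_deconvolve(blurred, psf)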

    Super-resolution: A comprehensive survey


    Deblurring Natural Image Using Super-Gaussian Fields

    Blind image deblurring is a challenging problem due to its ill-posed nature, and its success is closely related to a proper image prior. Although a large number of sparsity-based priors, such as the sparse gradient prior, have been successfully applied to blind image deblurring, they inherently suffer from several drawbacks that limit their applications. Existing sparsity-based priors are usually rooted in modeling the response of images to some specific filters (e.g., image gradients), which are insufficient to capture complicated image structures. Moreover, traditional sparse priors or regularizations model the filter responses (e.g., image gradients) independently and thus fail to depict the long-range correlation among them. To address these issues, we present a novel image prior for image deblurring based on a Super-Gaussian field model with adaptive structures. Instead of modeling the response of fixed short-term filters, the proposed Super-Gaussian fields capture the complicated structures in natural images by integrating potentials on all cliques (e.g., centred at each pixel) into a joint probabilistic distribution. Considering that fixed filters at different scales are impractical for the coarse-to-fine framework of image deblurring, we define each potential function as a super-Gaussian distribution. Through this definition, the partition function, the curse for traditional MRFs, can be theoretically ignored, and all model parameters of the proposed Super-Gaussian fields can be data-adaptively learned and inferred from the blurred observation within a variational framework. Extensive experiments on both blind and non-blind deblurring demonstrate the effectiveness of the proposed method.
    Yuhang Liu, Wenyong Dong, Dong Gong, Lei Zhang, Qinfeng Shi
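
    As a toy sketch of the kind of heavy-tailed (super-Gaussian) potential on local filter responses that such priors build on (the filters, exponent, and weights here are illustrative assumptions, not the adaptive structures learned in the paper):

        import numpy as np
        from scipy.ndimage import convolve

        def super_gaussian_energy(image, filters, p=0.5, eps=1e-3):
            """Sum of heavy-tailed potentials rho(r) = |r|^p over all filter responses."""
            energy = 0.0
            for f in filters:
                response = convolve(image, f, mode="nearest")
                energy += np.sum((np.abs(response) + eps) ** p)   # p < 1: heavier tail than a Gaussian
            return energy

        # Horizontal and vertical gradient filters as the simplest example of
        # potentials on cliques centred at each pixel; the learned model replaces
        # these fixed filters with data-adaptive structures.
        grad_x = np.array([[0.0, 0.0, 0.0], [-1.0, 1.0, 0.0], [0.0, 0.0, 0.0]])
        grad_y = grad_x.T
        image = np.random.rand(64, 64)
        print(super_gaussian_energy(image, [grad_x, grad_y]))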

    Robust Kernelized Bayesian Matrix Factorization for Video Background/Foreground Separation

    Development of effective and efficient techniques for video analysis is an important research area in machine learning and computer vision. Matrix factorization (MF) is a powerful tool for performing such tasks. In this contribution, we present a hierarchical robust kernelized Bayesian matrix factorization (RKBMF) model to decompose a data set into low-rank and sparse components. The RKBMF model automatically infers the parameters and latent variables, including the reduced rank, using variational Bayesian inference. Moreover, the model integrates side information about the similarity between frames to improve information extraction from the video. We employ RKBMF to extract background and foreground information from a traffic video. Experimental results demonstrate that RKBMF outperforms state-of-the-art approaches for background/foreground separation, particularly where the video is contaminated.
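
    A plain robust-PCA-style baseline (not the kernelized Bayesian RKBMF model itself) illustrates the low-rank plus sparse decomposition that underlies the background/foreground split; the rank, threshold, and iteration count are illustrative assumptions.

        import numpy as np

        def lowrank_sparse_split(video, rank=1, lam=0.1, iters=30):
            """video: (n_pixels, n_frames) matrix; returns (background L, foreground S)."""
            S = np.zeros_like(video)
            for _ in range(iters):
                # Low-rank update: truncated SVD of the part assigned to the background.
                U, sig, Vt = np.linalg.svd(video - S, full_matrices=False)
                L = (U[:, :rank] * sig[:rank]) @ Vt[:rank]
                # Sparse update: soft-threshold the residual assigned to the foreground.
                R = video - L
                S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
            return L, S

        # Usage: stack vectorised grayscale frames as columns of `video`; after the
        # split, reshaping a column of S back to the frame size gives a rough
        # moving-object (foreground) map for that frame.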